Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation
Automated machine learning (AutoML) can produce complex model ensembles by stacking, bagging, and boosting many individual models like trees, deep networks, and nearest neighbor estimators. While highly accurate, the resulting predictors are large, slow, and opaque as compared to their constituents. To improve the deployment of AutoML on tabular data, we propose FAST-DAD to distill arbitrarily-complex ensemble predictors into individual models like boosted trees, random forests, and deep networks. At the heart of our approach is a data augmentation strategy based on Gibbs sampling from a self-attention pseudolikelihood estimator. Across 30 datasets spanning regression and binary/multiclass classification tasks, FAST-DAD distillation produces significantly better individual models than one obtains through standard training on the original data.
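The core idea is to generate extra unlabeled rows by Gibbs sampling, resampling one feature at a time conditioned on the rest, and then have the teacher ensemble label them for the student. The sketch below illustrates that sampling loop; the conditional model here is a simple k-nearest-neighbor Gaussian stand-in for the paper's self-attention pseudolikelihood network (the stand-in, `gibbs_augment`, and its parameters are assumptions for illustration, not the authors' implementation).

```python
import numpy as np

def gibbs_augment(X, n_sweeps=3, k=10, rng=None):
    """Generate augmented rows via Gibbs sampling: resample each feature
    conditioned on all others. NOTE: the conditional sampler here (a
    Gaussian fit to k nearest neighbors) is a toy stand-in for the
    self-attention pseudolikelihood model used in FAST-DAD."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    Xa = X.copy()
    for _ in range(n_sweeps):
        for j in range(d):
            # condition on all features except j
            rest = np.delete(Xa, j, axis=1)
            rest_ref = np.delete(X, j, axis=1)
            # for each augmented row, find its k nearest original rows
            # on the remaining features
            dists = ((rest[:, None, :] - rest_ref[None, :, :]) ** 2).sum(-1)
            nbrs = np.argsort(dists, axis=1)[:, :k]
            # sample feature j from a Gaussian fit to the neighbors' values
            mu = X[:, j][nbrs].mean(axis=1)
            sd = X[:, j][nbrs].std(axis=1) + 1e-6
            Xa[:, j] = rng.normal(mu, sd)
    return Xa
```

In a distillation pipeline, the augmented rows would then be labeled by the teacher ensemble's predictions, and the student (e.g. a single gradient-boosted tree model) trained on the union of the original labeled data and the teacher-labeled augmented data.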
Review for NeurIPS paper: Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation
I am not sure what the green cross, diamond, etc. indicate: are those distilled models, and from which AutoML system were they obtained? Moreover, I am rather skeptical seeing only the mean reported. I would have liked to understand where your method is significantly better and where it fails, i.e., a best-case, worst-case, and average-case analysis; reporting the mean alone can be misleading. Regarding Section 3.1 (Maximum Pseudo-likelihood Estimation): tabular data typically contains numerical, categorical, and text-based data.